Around one-third of AI search tool answers make unsupported claims

New Scientist

AI tools including Perplexity and OpenAI's GPT-4 often provide one-sided answers to contentious questions, and don't back up their arguments with reliable sources. How well-supported are the claims made by AI tools? Generative AI tools, and the deep research agents and search engines powered by them, frequently make unsupported and biased claims that aren't backed up by the sources they cite. That's according to an analysis that found about one-third of the answers provided by AI tools aren't backed up by reliable sources. For OpenAI's GPT-4.5, the figure was even higher, at 47 per cent. The analysis also put five deep research agents through their paces: GPT-5's Deep Research feature, Bing Chat's Think Deeper option, and the deep research tools offered by You.com, Google Gemini and Perplexity.


Investigating Factuality in Long-Form Text Generation: The Roles of Self-Known and Self-Unknown

Tu, Lifu, Meng, Rui, Joty, Shafiq, Zhou, Yingbo, Yavuz, Semih

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated strong capabilities in text understanding and generation. However, they often lack factuality, producing a mixture of true and false information, especially in long-form generation. In this work, we investigate the factuality of long-form text generation across various large language models (LLMs), including GPT-4 and Gemini-1.5-Pro. Our analysis reveals that factuality scores tend to decline in later sentences of the generated text, accompanied by a rise in the number of unsupported claims. Furthermore, we explore the effectiveness of different evaluation settings to assess whether LLMs can accurately judge the correctness of their own outputs: Self-Known (the percentage of supported atomic claims, decomposed from LLM outputs, that the corresponding LLMs judge as correct) and Self-Unknown (the percentage of unsupported atomic claims that the corresponding LLMs judge as incorrect). The results indicate limitations even in advanced models like GPT-4 and Gemini-1.5-Pro. Moreover, we find a correlation between higher Self-Known scores and improved factuality, while higher Self-Unknown scores are associated with lower factuality. These findings show the limitations of current LLMs in long-form generation and provide valuable insights for improving factuality in long-form text generation. The long-context capabilities of large language models (LLMs) (OpenAI, 2023b; AI@Meta, 2024; Jiang et al., 2024; GeminiTeam, 2024; Anthropic, 2024) have seen significant advancements in recent years. A large body of work (Shaham et al., 2023; Bai et al., 2024; An et al., 2024; Zhang et al., 2024; Kuratov et al., 2024) has explored the ability of LLMs to handle long contexts; however, relatively few studies have examined their ability in long-form text generation.
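As defined above, Self-Known and Self-Unknown reduce to simple proportions over labeled atomic claims. A minimal sketch of that computation (the field names `supported` and `model_says_correct` are illustrative assumptions, not the paper's actual data format):

```python
def self_known_unknown(claims):
    """Compute (Self-Known, Self-Unknown) from a list of atomic claims.

    Each claim is a dict with two boolean fields (hypothetical schema):
      - "supported": whether the claim is backed by the reference source
      - "model_says_correct": whether the generating LLM judges it correct
    """
    supported = [c for c in claims if c["supported"]]
    unsupported = [c for c in claims if not c["supported"]]
    # Self-Known: fraction of supported claims the model judges correct.
    self_known = sum(c["model_says_correct"] for c in supported) / len(supported)
    # Self-Unknown: fraction of unsupported claims the model judges incorrect.
    self_unknown = sum(not c["model_says_correct"] for c in unsupported) / len(unsupported)
    return self_known, self_unknown


claims = [
    {"supported": True, "model_says_correct": True},
    {"supported": True, "model_says_correct": False},
    {"supported": False, "model_says_correct": False},
    {"supported": False, "model_says_correct": True},
]
print(self_known_unknown(claims))  # (0.5, 0.5)
```

Under the paper's findings, a well-calibrated model would push Self-Known toward 1 while keeping Self-Unknown low.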


f7e6c85504ce6e82442c770f7c8606f0-Reviews.html

Neural Information Processing Systems

The title of this paper is much like the paper itself: to-the-point, descriptive, and readable. "A simple example of Dirichlet process mixture inconsistency for the number of components" delivers on its promise by providing two easy-to-understand demonstrations of the severity of the problem of using Dirichlet process mixtures to estimate the number of components in a mixture model. The authors start by demonstrating that making such a component-cardinality estimate is widespread in the literature (and therefore a problem deserving of interest), briefly describe the Dirichlet process mixture (DPM) model (with particular emphasis on the popular normal likelihood case), and then demonstrate with a simple single-component mixture example how poorly estimation of component cardinality can go (their convincing answer: very poorly). Not only was the paper enjoyable to read but, refreshingly, it didn't try to fit 20 pages of material into an 8-page limit. One potential criticism of this paper is that this result should be well-known in some sense in the community.
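The inconsistency the review describes concerns the posterior over the number of components, but one piece of intuition is that the Dirichlet process prior itself keeps spawning new clusters as the sample size grows. A toy simulation of the equivalent Chinese restaurant process illustrates this (the function name and parameters are illustrative, not taken from the paper):

```python
import random


def crp_num_tables(n, alpha=1.0, seed=0):
    """Simulate a Chinese restaurant process with n customers and
    concentration alpha; return the number of occupied tables."""
    rng = random.Random(seed)
    tables = []  # customer count at each table
    for i in range(n):
        # Customer i starts a new table with probability alpha / (i + alpha),
        # else joins existing table t with probability tables[t] / (i + alpha).
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t, count in enumerate(tables):
            acc += count
            if r < acc:
                tables[t] += 1
                break
        else:
            tables.append(1)
    return len(tables)


def avg_tables(n, alpha=1.0, trials=50):
    """Average table count over several seeds, to smooth the randomness."""
    return sum(crp_num_tables(n, alpha, seed=s) for s in range(trials)) / trials
```

The expected number of tables is sum over i of alpha / (i + alpha), roughly alpha * log(n), so it keeps growing with the sample size rather than settling at any fixed count; this is one ingredient in why the posterior number of components need not concentrate on the true value.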